Autumn 2017
Link to my GitHub repository
https://github.com/aadomino/IODS-project
Our era of data - larger than ever and complex like chaos - requires several skills from statisticians and other data scientists. We are not afraid of this complexity; we want to discover the patterns hidden behind the numbers in matrices and arrays. These are the core themes of Open Data Science and this course.
The objective of this week was learning, performing and interpreting the results of regression analysis. This part includes code, interpretations and explanations of the results obtained with blood, sweat and tears.
The data used in this part comes from an international survey of approaches to learning - see more on this page.
You can find the pre-processed data here: GitHub data repository.
The dataset, learning2014, consists of 166 observations and 7 variables - these are the dimensions of the data. You can see it here:
learning2014 <- read.table("C:/Users/P8Z77-V/Documents/learning2014.csv", header = TRUE, sep = "\t")
dim(learning2014)
## [1] 166 7
It is also possible to observe the data structure of the data frame:
str(learning2014)
## 'data.frame': 166 obs. of 7 variables:
## $ gender : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
## $ age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ attitude: int 37 31 25 35 37 38 35 29 38 21 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ points : int 25 12 24 10 22 21 21 31 24 26 ...
The variables occurring in the data are gender, age, attitude, deep, stra, surf, and points.
These variables are used to categorise the students' survey answers. The questions pertained to the students' assessment of their deep, strategic, and surface learning, and data was also collected on their age, gender, and attitude towards statistics.
The dataset does not contain the students who received 0 points from the final exam. These results have been filtered out:
learning2014 <- dplyr::filter(learning2014, points > 0)
The summary of the data includes a lot of information on all the variables. There are minimum, maximum, median and mean values of each variable, and the first and the third quartiles of the data.
summary(learning2014)
## gender age attitude deep surf
## F:110 Min. :17.00 Min. :14.00 Min. :1.583 Min. :1.583
## M: 56 1st Qu.:21.00 1st Qu.:26.00 1st Qu.:3.333 1st Qu.:2.417
## Median :22.00 Median :32.00 Median :3.667 Median :2.833
## Mean :25.51 Mean :31.43 Mean :3.680 Mean :2.787
## 3rd Qu.:27.00 3rd Qu.:37.00 3rd Qu.:4.083 3rd Qu.:3.167
## Max. :55.00 Max. :50.00 Max. :4.917 Max. :4.333
## stra points
## Min. :1.250 Min. : 7.00
## 1st Qu.:2.625 1st Qu.:19.00
## Median :3.188 Median :23.00
## Mean :3.121 Mean :22.72
## 3rd Qu.:3.625 3rd Qu.:27.75
## Max. :5.000 Max. :33.00
An overview of this dataset produces a few interesting observations. We can clearly see that there are more female respondents than male ones, and the age variable shows a typical age distribution of university students: most respondents are in their twenties, with a few outliers, and ages range from 17 to 55. The attitude variable shows that the survey participants approach statistics with a slightly more positive than negative attitude. Deep learning is favoured more than strategic or surface learning (the deep, stra and surf variables). The final exam results range from 7 to 33.
With a graphical plot we can actually visualise the variables and the relationships between them.
# Access the GGally and ggplot2 libraries.
library(GGally)
library(ggplot2)
# Read a plot matrix with ggpairs() into a variable p0. Draw.
p0 <- ggpairs(learning2014, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))
p0
It is a complex plot. One thing that stands out is that there is a clear positive correlation between the exam points received by the students and their attitude towards statistics, while the correlations between exam points and the other variables are generally weak. Overall, the correlations between the variables in our dataset are not strong.
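As a quick numeric check of this, the correlation coefficient can be computed directly:
# correlation between attitude and exam points in the data
cor(learning2014$attitude, learning2014$points)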
The attitude-points correlation makes sense: if a student has a positive outlook on statistics (as we all do), they are more likely to learn and obtain good results on the final test. It is possible to visualise this particular relation in more detail. Here, it is done by plotting the variables attitude and points as a scatterplot. The colour codes gender. Regression lines are also included:
# Access the ggplot2 library.
library(ggplot2)
# Draw the plot (p1) with our data. Define the mapping. Define the visualization type (dots) and smoothing. Add the plot title.
p1 <- ggplot(learning2014, aes(x = attitude, y = points, col = gender)) + geom_point() + geom_smooth(method = "lm") + ggtitle("Students' attitude towards statistics vs final exam points")
p1
In this part we will choose and fit a suitable regression model, which will explain the data in more detail - we want to find out which factors influence the amount of exam points received. Points is the target (dependent) variable. The previous section shows that attitude, stra and surf correlate most strongly with points. They will be our explanatory variables in this model, the summary of which is printed out below:
# Fit a regression model (m0) with multiple explanatory variables: attitude, stra, surf. Print a summary of the model.
m0 <- lm(points ~ attitude + stra + surf, data = learning2014)
summary(m0)
##
## Call:
## lm(formula = points ~ attitude + stra + surf, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.1550 -3.4346 0.5156 3.6401 10.8952
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.01711 3.68375 2.991 0.00322 **
## attitude 0.33952 0.05741 5.913 1.93e-08 ***
## stra 0.85313 0.54159 1.575 0.11716
## surf -0.58607 0.80138 -0.731 0.46563
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared: 0.2074, Adjusted R-squared: 0.1927
## F-statistic: 14.13 on 3 and 162 DF, p-value: 3.156e-08
The summary restates the model call with the variables used, then shows the residuals and the coefficient estimates.
Residuals are assumed to be normally distributed with zero mean and constant variance. The median is indeed close to zero and the residuals seem to follow a normal distribution.
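A quick numeric sanity check of this, using the fitted model m0 from above:
# five-number summary of the residuals; in OLS their mean is zero by construction
summary(residuals(m0))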
Coefficients show the estimated influence of each explanatory variable on the target variable - the expected change in the target for a one-unit change in that explanatory variable, holding the others constant. In other words, here, if attitude increases by 1, the expected exam points increase by 0.33952. The more excited the students are about the subject, the better chances they have to pass the final with flying colours.
The summary also shows the standard errors, t- and p-values, and indicates the significance levels. The effect of attitude on the dependent variable (exam points) is statistically significant, while stra and surf are not (p-values over .05). If an explanatory variable in the model does not have a statistically significant relationship with the target variable, we remove the variable from the model and fit the model again without it. In the refitted summary below, the residuals' median has decreased and attitude remains highly statistically significant.
# Create a regression model m1 with only attitude. Print a summary of the model.
m1 <- lm(points ~ attitude, data = learning2014)
summary(m1)
##
## Call:
## lm(formula = points ~ attitude, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.9763 -3.2119 0.4339 4.1534 10.6645
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.63715 1.83035 6.358 1.95e-09 ***
## attitude 0.35255 0.05674 6.214 4.12e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared: 0.1906, Adjusted R-squared: 0.1856
## F-statistic: 38.61 on 1 and 164 DF, p-value: 4.119e-09
The second summary indicates that the estimated effect of students' attitude on exam results is 0.35255. Again, this means that for each one-unit increase in attitude, the exam results are expected to increase by about 0.35 points.
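To make this concrete, the fitted line can be used to predict the exam points for a hypothetical attitude value (30 is just an arbitrary example within the observed range):
# predicted exam points for attitude = 30: 11.63715 + 0.35255 * 30
predict(m1, newdata = data.frame(attitude = 30))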
The multiple R-squared value measures how much of the variance of the target variable is explained by the model; the rest of the variance is due to factors not included in the model. It can be understood as a goodness-of-fit measure.
The multiple R-squared is higher in the first model, even though two of its explanatory variables were not statistically significant and were subsequently dropped. This is expected: the value increases whenever variables are added to the model, irrespective of their significance.
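Both values can be pulled straight from the model summaries for a side-by-side comparison; the adjusted R-squared, which penalises additional variables, is the fairer measure here:
# multiple and adjusted R-squared of both models
summary(m0)$r.squared; summary(m0)$adj.r.squared
summary(m1)$r.squared; summary(m1)$adj.r.squared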
Next, let's draw three diagnostic plots: Residuals vs Fitted values, Normal Q-Q plot and Residuals vs Leverage:
# Diagnostic plots using the plot() function. Choose the plots 1, 2 and 5.
par(mfrow = c(1,1))
plot(m1, which = c(1,2,5))
These plots allow us to assess if some of the assumptions we made about our linear regression model are correct.
The first plot, residuals vs fits, is a scatter plot of residuals on the y axis and fitted values (estimated responses) on the x axis. The plot is used to detect non-linearity, unequal error variances, and outliers - as simply explained here.
Our plot seems to show that residuals and the fitted values are uncorrelated, just as they should be in a linear model with normally distributed errors and constant variance. In other words, the scatter plot confirms our assumption about the error distribution and variance. Great.
The second plot is a Q-Q plot (quantile-quantile plot), which is used to assess whether the residuals really follow the distribution we assumed in our model - for us, a normal distribution. (A great source on interpreting this kind of plot can be found here).
A Q-Q plot is a scatterplot created by plotting two sets of quantiles against one another. If both sets of quantiles came from the same distribution, we should see the points forming a line that’s roughly straight.
Our plot indeed forms a straight line. Assumption confirmed.
The third plot, Residuals vs Leverage, allows us to see if the extreme values in the data influence the regression line, i.e. if the fact that we include them in our dataset influences the overall results.
The patterns in this plot are not really relevant here. There are two things to look for: outlying values at the upper right or lower right corner (values far away from the rest of the data points), and cases outside of the dashed red lines (Cook's distance).
In our plot, we have no influential cases. The Cook's distance lines are not even visible, which means that all our data points fall well within them; there are no extreme values. The plot is typical of datasets with no influential cases.
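The same conclusion can be checked numerically: a common rule of thumb flags observations with a Cook's distance near or above 1.
# largest Cook's distance among the observations of model m1
max(cooks.distance(m1))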
Finally, some more reading I enjoyed on the subject of diagnostic plots.
The objective of this week was learning how to join together data from different sources for further analysis, and analysing the results of logistic regression. This part includes code, interpretations and explanations of the results. Definitely wasn't easy-peasy.
My processed data in .csv format and the script used to process it can be found under these links.
The data comes from The Machine Learning Repository at UCI.
> This data approach student achievement in secondary education of two Portuguese schools. The data attributes include student grades, demographic, social and school related features and it was collected by using school reports and questionnaires. Two datasets are provided regarding the performance in two distinct subjects: Mathematics (mat) and Portuguese (por). -source
The source data was merged; the variables not used for joining the two data have been combined by averaging (including the grade variables). Two new variables were included: alc_use, the average of ‘Dalc’ and ‘Walc’ (which describe alcohol use on weekdays and on weekends, respectively); and high_use, which is TRUE if alc_use is higher than 2 and FALSE if it is not.
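The full preprocessing is in the script linked above; below is a minimal sketch of the idea, assuming the two UCI files have already been read into data frames named math and por (the names and the exact join columns are illustrative, not necessarily identical to my script):
library(dplyr)
# columns used for joining the two datasets
join_by <- c("school", "sex", "age", "address", "famsize", "Pstatus",
             "Medu", "Fedu", "Mjob", "Fjob", "reason", "nursery", "internet")
math_por <- inner_join(math, por, by = join_by, suffix = c(".math", ".por"))
# keep the join columns, then combine each duplicated column:
# numeric pairs are averaged, other pairs keep the first answer
alc <- select(math_por, one_of(join_by))
for (col_name in colnames(math)[!colnames(math) %in% join_by]) {
  two_cols <- select(math_por, starts_with(col_name))
  first_col <- two_cols[[1]]
  alc[col_name] <- if (is.numeric(first_col)) round(rowMeans(two_cols)) else first_col
}
# the two new variables
alc <- mutate(alc, alc_use = (Dalc + Walc) / 2, high_use = alc_use > 2)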
alc <- read.table("C:\\Users\\P8Z77-V\\Documents\\GitHub\\IODS-project\\data\\alc.csv", header = TRUE, sep = ";")
str(alc)
## 'data.frame': 382 obs. of 35 variables:
## $ school : Factor w/ 2 levels "GP","MS": 1 1 1 1 1 1 1 1 1 1 ...
## $ sex : Factor w/ 2 levels "F","M": 1 1 1 1 1 2 2 1 2 2 ...
## $ age : int 18 17 15 15 16 16 16 17 15 15 ...
## $ address : Factor w/ 2 levels "R","U": 2 2 2 2 2 2 2 2 2 2 ...
## $ famsize : Factor w/ 2 levels "GT3","LE3": 1 1 2 1 1 2 2 1 2 1 ...
## $ Pstatus : Factor w/ 2 levels "A","T": 1 2 2 2 2 2 2 1 1 2 ...
## $ Medu : int 4 1 1 4 3 4 2 4 3 3 ...
## $ Fedu : int 4 1 1 2 3 3 2 4 2 4 ...
## $ Mjob : Factor w/ 5 levels "at_home","health",..: 1 1 1 2 3 4 3 3 4 3 ...
## $ Fjob : Factor w/ 5 levels "at_home","health",..: 5 3 3 4 3 3 3 5 3 3 ...
## $ reason : Factor w/ 4 levels "course","home",..: 1 1 3 2 2 4 2 2 2 2 ...
## $ nursery : Factor w/ 2 levels "no","yes": 2 1 2 2 2 2 2 2 2 2 ...
## $ internet : Factor w/ 2 levels "no","yes": 1 2 2 2 1 2 2 1 2 2 ...
## $ guardian : Factor w/ 3 levels "father","mother",..: 2 1 2 2 1 2 2 2 2 2 ...
## $ traveltime: int 2 1 1 1 1 1 1 2 1 1 ...
## $ studytime : int 2 2 2 3 2 2 2 2 2 2 ...
## $ failures : int 0 0 2 0 0 0 0 0 0 0 ...
## $ schoolsup : Factor w/ 2 levels "no","yes": 2 1 2 1 1 1 1 2 1 1 ...
## $ famsup : Factor w/ 2 levels "no","yes": 1 2 1 2 2 2 1 2 2 2 ...
## $ paid : Factor w/ 2 levels "no","yes": 1 1 2 2 2 2 1 1 2 2 ...
## $ activities: Factor w/ 2 levels "no","yes": 1 1 1 2 1 2 1 1 1 2 ...
## $ higher : Factor w/ 2 levels "no","yes": 2 2 2 2 2 2 2 2 2 2 ...
## $ romantic : Factor w/ 2 levels "no","yes": 1 1 1 2 1 1 1 1 1 1 ...
## $ famrel : int 4 5 4 3 4 5 4 4 4 5 ...
## $ freetime : int 3 3 3 2 3 4 4 1 2 5 ...
## $ goout : int 4 3 2 2 2 2 4 4 2 1 ...
## $ Dalc : int 1 1 2 1 1 1 1 1 1 1 ...
## $ Walc : int 1 1 3 1 2 2 1 1 1 1 ...
## $ health : int 3 3 3 5 5 5 3 1 1 5 ...
## $ absences : int 5 3 8 1 2 8 0 4 0 0 ...
## $ G1 : int 2 7 10 14 8 14 12 8 16 13 ...
## $ G2 : int 8 8 10 14 12 14 12 9 17 14 ...
## $ G3 : int 8 8 11 14 12 14 12 10 18 14 ...
## $ alc_use : num 1 1 2.5 1 1.5 1.5 1 1 1 1 ...
## $ high_use : logi FALSE FALSE TRUE FALSE FALSE FALSE ...
The data includes 382 observations and 35 variables. The attributes have to do with the students’ background, learning outcomes, family, health, and free time activities.
It is reasonable to assume that some variables are positively correlated with some others. Judging by the DataCamp exercises completed earlier, sex, absences and failures should be correlated with alcohol consumption. I have also chosen freetime as a variable with potential to correlate with it - if the students have more free time, they are more likely to consume alcohol. Even though they shouldn’t and they know it.
So, the hypotheses - each of them assuming a positive correlation with alcohol consumption - are the following: male students drink more than female students; more absences go together with higher consumption; more past class failures go together with higher consumption; and more free time goes together with higher consumption.
In this part, let’s numerically and graphically explore the distributions of the chosen variables and their relationships with alcohol consumption, and see if our hypotheses hold up.
First, let’s access the libraries needed in this part.
# Access the libraries needed in this section.
library(dplyr)
##
## Attaching package: 'dplyr'
## The following object is masked from 'package:GGally':
##
## nasa
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
library(tidyr)
library(ggplot2)
library(boot)
Next, it's useful to have a good look at all the variables in graphical form. This way it is easier to see the tendencies and the distribution of the data.
The bar plots show that the distribution of males and females is very balanced, with slightly fewer male students.
Most of the students haven’t failed their classes, but there are some who have had one or more failures.
Free time is interestingly distributed: the students had to assess their amount of free time on a scale of 1-5. Most of them reported an average amount of free time, but more reported having quite a lot or very much of it than having little.
The majority of students have less than 10 absences, but there are some outliers.
# The chosen variables: sex, failures, free time and absences.
ggplot(data = alc, aes(x = sex)) + geom_bar()
ggplot(data = alc, aes(x = failures)) + geom_bar()
ggplot(data = alc, aes(x = freetime)) + geom_bar()
ggplot(data = alc, aes(x = absences)) + geom_bar()
Alcohol consumption is generally low. The students who have admitted using a lot of alcohol constitute less than 1/3 of all students.
# Alcohol use and high use.
ggplot(data = alc, aes(x = alc_use)) + geom_bar()
ggplot(data = alc, aes(x = high_use)) + geom_bar()
What if we combine some of the variables? Let’s see if we can get an idea about some of the assumptions made earlier.
The relation of high alcohol use with gender seems to confirm the hypothesis - there are more male heavy drinkers than female ones.
g0 <- ggplot(data = alc, aes(x = high_use))
g0 + geom_bar() + facet_wrap("sex")
What about failures? The numbers are low, so it is difficult to assess from the bar plots, but it seems that at least in the group of students with the highest number of failures, heavy drinkers outnumber non-heavy drinkers. Heavy drinking is more prominent in the groups with 1 or 2 failures than in the group with 0 failures.
g1 <- ggplot(data = alc, aes(x = high_use))
g1 + geom_bar() + facet_wrap("failures")
In the following plots, high_use is the target variable. For the colour visualisation, sex is used, and absences, failures and freetime are the explanatory variables.
g2 <- ggplot(alc, aes(x = high_use, col = sex, y = absences))
g2 + geom_boxplot() + ylab("absences")
g3 <- ggplot(alc, aes(x = high_use, col = sex, y = failures))
g3 + geom_boxplot() + ylab("failures")
g4 <- ggplot(alc, aes(x = high_use, col = sex, y = freetime))
g4 + geom_boxplot() + ylab("free time")
This visualisation also seems to confirm the first hypothesis - high alcohol consumption is more likely associated with male sex.
Cross tabulation lets us compare the relationship between any two variables.
Here, alc_use is the variable of interest. The mean values of the other variables show a definite upward tendency as alcohol use increases. This is consistent with our hypotheses.
alc %>% group_by(alc_use, sex) %>% summarise(count = n(), mean_absences = mean(absences), mean_failures = mean(failures), mean_freetime = mean(freetime))
## # A tibble: 17 x 6
## # Groups: alc_use [?]
## alc_use sex count mean_absences mean_failures mean_freetime
## <dbl> <fctr> <int> <dbl> <dbl> <dbl>
## 1 1.0 F 87 3.781609 0.10344828 2.885057
## 2 1.0 M 53 2.660377 0.11320755 3.603774
## 3 1.5 F 42 4.642857 0.04761905 2.833333
## 4 1.5 M 27 3.592593 0.29629630 3.111111
## 5 2.0 F 27 5.000000 0.25925926 3.222222
## 6 2.0 M 32 3.000000 0.18750000 3.281250
## 7 2.5 F 26 6.576923 0.19230769 3.307692
## 8 2.5 M 18 6.222222 0.05555556 3.222222
## 9 3.0 F 11 7.636364 0.54545455 3.272727
## 10 3.0 M 21 5.285714 0.57142857 3.428571
## 11 3.5 F 3 8.000000 0.33333333 3.666667
## 12 3.5 M 14 5.142857 0.50000000 3.714286
## 13 4.0 F 1 3.000000 0.00000000 3.000000
## 14 4.0 M 8 6.375000 0.25000000 3.500000
## 15 4.5 M 3 12.000000 0.00000000 3.333333
## 16 5.0 F 1 3.000000 0.00000000 5.000000
## 17 5.0 M 8 7.375000 0.62500000 4.000000
If high_use is the variable of interest, the tendency is also pronounced. Especially the hypothesis of heavy drinking correlating with absences stands out.
alc %>% group_by(high_use, sex) %>% summarise(count = n(), mean_absences = mean(absences), mean_failures = mean(failures), mean_freetime = mean(freetime))
## # A tibble: 4 x 6
## # Groups: high_use [?]
## high_use sex count mean_absences mean_failures mean_freetime
## <lgl> <fctr> <int> <dbl> <dbl> <dbl>
## 1 FALSE F 156 4.224359 0.1153846 2.929487
## 2 FALSE M 112 2.982143 0.1785714 3.392857
## 3 TRUE F 42 6.785714 0.2857143 3.357143
## 4 TRUE M 72 6.125000 0.3750000 3.500000
In this part, we use logistic regression to statistically explore the relationship between the chosen variables and the binary high/low alcohol consumption variable as the target variable.
# Find the model with glm()
m0 <- glm(high_use ~ sex + absences + failures + freetime, data = alc, family = "binomial")
# Present a summary of the fitted model.
summary(m0)
##
## Call:
## glm(formula = high_use ~ sex + absences + failures + freetime,
## family = "binomial", data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.9664 -0.8170 -0.6065 1.0585 2.0330
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -2.75479 0.46053 -5.982 2.21e-09 ***
## sexM 0.84223 0.24705 3.409 0.000652 ***
## absences 0.09461 0.02280 4.150 3.33e-05 ***
## failures 0.43004 0.19312 2.227 0.025961 *
## freetime 0.27453 0.12543 2.189 0.028621 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 419.50 on 377 degrees of freedom
## AIC: 429.5
##
## Number of Fisher Scoring iterations: 4
The results here are quite consistent with the previous observations.
The associations of male sex and of absences with high alcohol consumption are highly significant. By contrast, failures and free time are significant only at the 5% level.
Finally, it is time to present and interpret the coefficients of the model as odds ratios, deliver confidence intervals for them, and compare the results to the previously stated hypotheses.
# Compute odds ratios (OR).
OR1 <- coef(m0) %>% exp
# Compute confidence intervals (CI).
CI1 <- confint(m0) %>% exp
## Waiting for profiling to be done...
# Print the odds ratios along with their confidence intervals.
cbind(OR1, CI1)
## OR1 2.5 % 97.5 %
## (Intercept) 0.06362265 0.02504274 0.1528626
## sexM 2.32154106 1.43694673 3.7924512
## absences 1.09922964 1.05343261 1.1522505
## failures 1.53731413 1.05427814 2.2602725
## freetime 1.31591429 1.03172262 1.6889201
I had to remind myself of this: > An odds ratio (OR) is a measure of association between an exposure and an outcome. The OR represents the odds that an outcome will occur given a particular exposure, compared to the odds of the outcome occurring in the absence of that exposure. - source
So, in this case, the odds ratio for the variable sexM is the ratio of the odds of a male using alcohol heavily to the odds of a female using alcohol heavily. The odds ratio is ca. 2.32 - this means that in our data, the odds of a boy drinking heavily are about 2.32 times the odds of a girl doing so.
All the OR values are greater than 1, which means a positive association with high_use for all the variables chosen at the beginning.
Here, using the variables which, according to the logistic regression model above, had a statistical relationship with high/low alcohol consumption, we will explore the predictive power of the model. Only sex and absences had a high statistical significance, so let’s keep these.
m1 <- glm(high_use ~ sex + absences, data = alc, family = "binomial")
summary(m1)
##
## Call:
## glm(formula = high_use ~ sex + absences, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.2753 -0.8753 -0.6081 1.0921 1.9920
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.83606 0.22251 -8.252 < 2e-16 ***
## sexM 0.97762 0.23982 4.076 4.57e-05 ***
## absences 0.09659 0.02306 4.189 2.80e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 430.07 on 379 degrees of freedom
## AIC: 436.07
##
## Number of Fisher Scoring iterations: 4
Below is a table of predictions compared to the actual values of the variable high_use in our new model. Two columns are added: probability, containing the predicted probabilities, and prediction, which has the value TRUE if the probability is larger than 0.5.
probabilities <- predict(m1, type = "response")
# Add the predicted probabilities to 'alc'.
alc <- mutate(alc, probability = probabilities)
# Use the probabilities to make a prediction of high_use.
alc <- mutate(alc, prediction = probability>0.5)
# Tabulate the target variable versus the predictions.
table(high_use = alc$high_use, prediction = alc$prediction)
## prediction
## high_use FALSE TRUE
## FALSE 258 10
## TRUE 88 26
The model is accurate in 258 + 26 = 284 cases. However, in 10 cases it predicted the respondent to be a heavy drinker when in fact they were not, and in as many as 88 cases it predicted them NOT to be a heavy drinker when they were.
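From the same table, the overall accuracy can also be computed directly:
# proportion of correct predictions, i.e. (258 + 26) / 382
mean(alc$high_use == alc$prediction)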
So, what is the error in the prediction?
# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2565445
The training error seems to be ca. 26%.
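Since the boot library was loaded earlier, a natural extra step is to estimate the out-of-sample error with 10-fold cross-validation, reusing the same loss function (a quick sketch, not part of the main task):
# 10-fold cross-validation of model m1; delta[1] is the average hold-out error
cv <- boot::cv.glm(data = alc, cost = loss_func, glmfit = m1, K = 10)
cv$delta[1]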
A lot of fun, surely.
Tasks 1-3.
First, access the necessary libraries.
# Access the needed libraries:
library(dplyr)
library(tidyr)
library(ggplot2)
library(boot)
library(MASS)
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
library(tidyverse)
## -- Attaching packages -------------------------------------- tidyverse 1.2.1 --
## ✓ tibble 1.3.4 ✓ purrr 0.2.4
## ✓ readr 1.1.1 ✓ stringr 1.2.0
## ✓ tibble 1.3.4 ✓ forcats 0.2.0
## -- Conflicts ----------------------------------------- tidyverse_conflicts() --
## x dplyr::filter() masks stats::filter()
## x dplyr::lag() masks stats::lag()
## x MASS::select() masks dplyr::select()
library(corrplot)
## corrplot 0.84 loaded
Let’s load the Boston data from the MASS package and explore the structure and the dimensions of the data and describe the dataset.
# load the data
data("Boston")
# explore the dataset
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08204 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
The Boston data frame has 506 rows and 14 columns. It describes housing values in the suburbs of Boston.
What are the variables in the data?
colnames(Boston)
## [1] "crim" "zn" "indus" "chas" "nox" "rm" "age"
## [8] "dis" "rad" "tax" "ptratio" "black" "lstat" "medv"
The descriptions of the variables are available here. They concern such things as per capita crime rate by town, average number of rooms per dwelling, or even pupil-teacher ratio by town.
Now let's have a look at a graphical overview of the data, building on the variable summaries shown above.
From the summary of the variables we can see minimum, maximum, median and mean values as well as the 1st and 3rd quartiles of the variables.
The correlations between the different variables can be studied with the help of a correlations matrix and a correlations plot.
# First, calculate the correlation matrix and round it to two decimal places:
cor_matrix <- cor(Boston) %>% round(digits = 2)
# Print the correlation matrix:
cor_matrix
## crim zn indus chas nox rm age dis rad tax
## crim 1.00 -0.20 0.41 -0.06 0.42 -0.22 0.35 -0.38 0.63 0.58
## zn -0.20 1.00 -0.53 -0.04 -0.52 0.31 -0.57 0.66 -0.31 -0.31
## indus 0.41 -0.53 1.00 0.06 0.76 -0.39 0.64 -0.71 0.60 0.72
## chas -0.06 -0.04 0.06 1.00 0.09 0.09 0.09 -0.10 -0.01 -0.04
## nox 0.42 -0.52 0.76 0.09 1.00 -0.30 0.73 -0.77 0.61 0.67
## rm -0.22 0.31 -0.39 0.09 -0.30 1.00 -0.24 0.21 -0.21 -0.29
## age 0.35 -0.57 0.64 0.09 0.73 -0.24 1.00 -0.75 0.46 0.51
## dis -0.38 0.66 -0.71 -0.10 -0.77 0.21 -0.75 1.00 -0.49 -0.53
## rad 0.63 -0.31 0.60 -0.01 0.61 -0.21 0.46 -0.49 1.00 0.91
## tax 0.58 -0.31 0.72 -0.04 0.67 -0.29 0.51 -0.53 0.91 1.00
## ptratio 0.29 -0.39 0.38 -0.12 0.19 -0.36 0.26 -0.23 0.46 0.46
## black -0.39 0.18 -0.36 0.05 -0.38 0.13 -0.27 0.29 -0.44 -0.44
## lstat 0.46 -0.41 0.60 -0.05 0.59 -0.61 0.60 -0.50 0.49 0.54
## medv -0.39 0.36 -0.48 0.18 -0.43 0.70 -0.38 0.25 -0.38 -0.47
## ptratio black lstat medv
## crim 0.29 -0.39 0.46 -0.39
## zn -0.39 0.18 -0.41 0.36
## indus 0.38 -0.36 0.60 -0.48
## chas -0.12 0.05 -0.05 0.18
## nox 0.19 -0.38 0.59 -0.43
## rm -0.36 0.13 -0.61 0.70
## age 0.26 -0.27 0.60 -0.38
## dis -0.23 0.29 -0.50 0.25
## rad 0.46 -0.44 0.49 -0.38
## tax 0.46 -0.44 0.54 -0.47
## ptratio 1.00 -0.18 0.37 -0.51
## black -0.18 1.00 -0.37 0.33
## lstat 0.37 -0.37 1.00 -0.74
## medv -0.51 0.33 -0.74 1.00
# Visualize the correlation matrix with a correlations plot:
corrplot(cor_matrix, method="circle", type = "upper", cl.pos = "b", tl.pos = "d", tl.cex = 0.6)
From the plot above we can easily see which variables correlate with which, and whether that correlation is positive (blue) or negative (red). Some observations: rad and tax have the strongest positive correlation (0.91); dis correlates strongly negatively with nox, age and indus; and lstat correlates strongly negatively with medv.
Task 4
In this part, we are performing the following:
* Standardize the dataset and print out summaries of the scaled data.
* Create a categorical variable of the crime rate in the Boston dataset (from the scaled crime rate).
* Use the quantiles as the break points in the categorical variable.
* Drop the old crime rate variable from the dataset.
* Divide the dataset to train and test sets, so that 80% of the data belongs to the train set.
Let’s standardize the dataset and print out summaries of the scaled data for the later classification and clustering analysis. How did the variables change?
# center and standardize variables
boston_scaled <- scale(Boston)
# summaries of the scaled variables
summary(boston_scaled)
## crim zn indus
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668
## Median :-0.390280 Median :-0.48724 Median :-0.2109
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202
## chas nox rm age
## Min. :-0.2723 Min. :-1.4644 Min. :-3.8764 Min. :-2.3331
## 1st Qu.:-0.2723 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366
## Median :-0.2723 Median :-0.1441 Median :-0.1084 Median : 0.3171
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.:-0.2723 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059
## Max. : 3.6648 Max. : 2.7296 Max. : 3.5515 Max. : 1.1164
## dis rad tax ptratio
## Min. :-1.2658 Min. :-0.9819 Min. :-1.3127 Min. :-2.7047
## 1st Qu.:-0.8049 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876
## Median :-0.2790 Median :-0.5225 Median :-0.4642 Median : 0.2746
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6617 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058
## Max. : 3.9566 Max. : 1.6596 Max. : 1.7964 Max. : 1.6372
## black lstat medv
## Min. :-3.9033 Min. :-1.5296 Min. :-1.9063
## 1st Qu.: 0.2049 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median : 0.3808 Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.4332 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 0.4406 Max. : 3.5453 Max. : 2.9865
The variables are now on a similar scale, which makes them easier to compare and estimate. They also all have mean zero.
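Scaling subtracts the column mean from each value and divides by the column standard deviation, i.e. scaled(x) = (x - mean(x)) / sd(x), so the zero means can be verified directly:
# column means of the scaled data - zero up to floating-point rounding
round(colMeans(boston_scaled), digits = 10)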
Next, let's create a categorical variable of the crime rate (from the scaled crime rate). This variable is based on the quantiles of the scaled crime rate and is used from now on instead of the continuous one.
# class of the boston_scaled object
class(boston_scaled)
## [1] "matrix"
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)
# summary of the scaled crime rate
summary(boston_scaled$crim)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -0.419367 -0.410563 -0.390280 0.000000 0.007389 9.924110
# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
bins
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
# look at the table of the new factor crime
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
Let’s drop the old crime rate variable from the dataset and replace it with the new categorical variable for crime rates - for clarity:
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)
# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)
Finally, the last step: 80% of the data will become the training (train) set and 20% the test set. Predictions for new data are then made with the test set.
# number of rows in the Boston dataset
n <- nrow(boston_scaled)
# choose randomly 80% of the rows
ind <- sample(n, size = n * 0.8)
# create train set
train <- boston_scaled[ind,]
# create test set
test <- boston_scaled[-ind,]
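A side note: sample() draws rows at random, so the split (and all results below) will vary between runs. Fixing the seed before sampling makes the analysis reproducible - the same is done later with set.seed(123) before the k-means runs:
# fix the RNG state before creating the split (123 is an arbitrary choice)
set.seed(123)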
Tasks 5 and 6
Now let’s fit the linear discriminant analysis on the train set. LDA is a generalization of Fisher’s linear discriminant, a method used in statistics, pattern recognition and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events (as explained by everyone’s fav source).
We will use the categorical crime rate as the target variable and all the other variables in the dataset as predictor variables.
# linear discriminant analysis
lda.fit <- lda(crime ~., data = train)
# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2698020 0.2524752 0.2376238 0.2400990
##
## Group means:
## zn indus chas nox rm
## low 0.9430467 -0.9080749 -0.09172814 -0.8810272 0.42355211
## med_low -0.1000849 -0.3501384 -0.04073494 -0.5965550 -0.09734902
## med_high -0.3711146 0.1347557 0.26081992 0.4177604 0.07999999
## high -0.4872402 1.0172187 -0.15056308 1.0013741 -0.41505403
## age dis rad tax ptratio
## low -0.8807905 0.8755557 -0.6879058 -0.7348638 -0.45746924
## med_low -0.3837929 0.4074958 -0.5517590 -0.5181956 -0.07093611
## med_high 0.4224969 -0.4157099 -0.4279751 -0.3499332 -0.32973898
## high 0.7966610 -0.8549578 1.6371072 1.5133254 0.77958792
## black lstat medv
## low 0.37601827 -0.75578399 0.5100570
## med_low 0.35977248 -0.16922630 0.0462142
## med_high 0.09738967 0.05918811 0.1617930
## high -0.88297794 0.94913965 -0.7721864
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.10074090 0.572638697 -1.12204829
## indus 0.07343586 -0.201211086 0.29331657
## chas -0.11825272 -0.096159004 -0.01067145
## nox 0.30768987 -0.889788711 -1.16237027
## rm -0.06735715 -0.006410732 -0.07782399
## age 0.19740299 -0.302039294 -0.08975210
## dis -0.11975619 -0.200460938 0.42423603
## rad 3.30304133 0.932680360 -0.16292472
## tax 0.07658427 0.037976810 0.59289033
## ptratio 0.09434274 -0.021938600 -0.25568960
## black -0.13360910 0.033486607 0.15583109
## lstat 0.25903614 -0.186262806 0.42984628
## medv 0.19216193 -0.433217104 -0.03501410
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9514 0.0366 0.0120
The LDA calculates the probability of a new observation being classified as belonging to each class on the basis of the trained model, and assigns every observation to the most probable class.
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
# target classes as numeric
classes <- as.numeric(train$crime)
# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 4)
A biplot is a visualisation that allows us to clearly see the most prominent predictor variables. It is clearly visible that accessibility to radial highways - rad - is the most telling variable.
In order to assess the performance of the model in predicting the crime rate, let’s save the crime categories from the test set and then remove the categorical crime variable from the test dataset…
# save the correct classes from test data
correct_classes <- test$crime
# remove the crime variable from test data
test <- dplyr::select(test, -crime)
…and then predict the classes with the LDA model on the test data with the predict() function, and cross tabulate the results with the crime categories from the test set:
# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)
# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 9 8 1 0
## med_low 4 12 8 0
## med_high 0 10 18 2
## high 0 0 0 30
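From the same objects we can also compute the overall share of correct test-set predictions:
# proportion of test observations classified correctly
mean(lda.pred$class == correct_classes)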
The cross tabulation of the results tells us that the model predicts the high crime rate class perfectly (which is to be expected, since rad was such a telling feature previously); the model has some problems separating med_low from low, but overall it performs really well.
Task 7
It’s time for data clustering. Let’s reload the Boston dataset and standardize it.
# center and standardize variables
boston_scaled <- scale(Boston)
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)
The next step is to calculate the (Euclidean) distances between the observations with a distance matrix. Note that the code below computes the distances on the original, unscaled Boston data; since the variables are on very different scales, boston_scaled would arguably be the better input here.
# euclidean distance matrix
dist_eu <- dist(Boston)
# look at the summary of the distances
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.119 85.624 170.539 226.315 371.950 626.047
Now let's perform K-means clustering with K = 3 and have a look at a plot of columns 6-10 of the data:
# k-means clustering
km <-kmeans(Boston, centers = 3)
# plot the Boston dataset with clusters
pairs(Boston[6:10], col = km$cluster)
But is it optimal? How do we know what the optimal amount of clusters is?
Let’s take the within cluster sum of squares (WCSS) and look at the changes in it depending on the number of clusters. The optimal number of clusters shows as a sharp drop in total WCSS.
set.seed(123)
# determine the number of clusters
k_max <- 10
# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(Boston, k)$tot.withinss})
# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')
The optimal number of clusters seems to be 2 (that is where the total WCSS drops sharply), so let's use that:
# k-means clustering
km <-kmeans(Boston, centers = 2)
# plot the Boston dataset with clusters
pairs(Boston[6:10], col = km$cluster)
We can also have a look at other columns:
pairs(Boston[7:14], col = km$cluster)
Again it looks like the same variables as before are the most distinctive: access to highways and property tax.
Actually the super-bonus exercise, because it’s worth more points.
Run the code below for the (scaled) train data that you used to fit the LDA. The code creates a matrix product, which is a projection of the data points.
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404 13
dim(lda.fit$scaling)
## [1] 13 3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
Next, install and access the plotly package. Create a 3D plot (Cool!) of the columns of the matrix product by typing the code below.
# access the needed libraries:
library(plotly)
##
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
##
## select
## The following object is masked from 'package:ggplot2':
##
## last_plot
## The following object is masked from 'package:stats':
##
## filter
## The following object is masked from 'package:graphics':
##
## layout
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers')
Let's adjust the code: add color as an argument in the plot_ly() function and set it to the crime classes of the train set.
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = train$crime)
Finally, let's draw another 3D plot where the colour is defined by the k-means clusters. Note that km above was fitted on the full Boston data (506 rows) while the matrix product has only the 404 training rows, and km$centers contains the cluster centres rather than the cluster labels - so here k-means is refit on the train predictors and its cluster vector is used for the colours:
# refit k-means on the train predictors so the cluster labels match the 404 plotted points
km_train <- kmeans(model_predictors, centers = 2)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type = 'scatter3d', mode = 'markers', color = as.factor(km_train$cluster))
Hmm. This one is more difficult to interpret.
The objective of this week was to learn the basics of two dimensionality reduction techniques: PCA (principal component analysis) and MCA (multiple correspondence analysis).
Tasks 1 and 2
The original data, coming from the United Nations Development Programme, is composed of two datasets measuring human development and gender inequality for a set of countries - more precisely, the Human Development Index (HDI) and the Gender Inequality Index (GII). It is quite recent, as most of it comes from the year 2015. The data used in this part has, however, been modified from the original.
My processed data in .csv format and the script used to process it can be found under these links.
human <- read.csv(file = "C:\\Users\\P8Z77-V\\Documents\\GitHub\\IODS-project\\data\\human.csv", sep = ",", header = TRUE)
str(human)
## 'data.frame': 155 obs. of 8 variables:
## $ Edu2.FM : num 1.007 0.997 0.983 0.989 0.969 ...
## $ Labo.FM : num 0.891 0.819 0.825 0.884 0.829 ...
## $ Life.Exp : num 81.6 82.4 83 80.2 81.6 80.9 80.9 79.1 82 81.8 ...
## $ Edu.Exp : num 17.5 20.2 15.8 18.7 17.9 16.5 18.6 16.5 15.9 19.2 ...
## $ GNI : int 166 135 156 139 140 137 127 154 134 117 ...
## $ Mat.Mor : int 4 6 6 5 6 7 9 28 11 8 ...
## $ Ado.Birth: num 7.8 12.1 1.9 5.1 6.2 3.8 8.2 31 14.5 25.3 ...
## $ Parli.F : num 39.6 30.5 28.5 38 36.9 36.9 19.9 19.4 28.2 31.4 ...
dim(human)
## [1] 155 8
The data includes 8 variables (Edu2.FM, Labo.FM, Life.Exp, Edu.Exp, GNI, Mat.Mor, Ado.Birth, Parli.F) of human development and inequality for the 155 countries that had complete data; each row represents one country. These variables describe, respectively: the ratio of females to males with at least secondary education; the ratio of females to males participating in the labour force; life expectancy at birth; expected years of schooling; gross national income per capita; the maternal mortality ratio; the adolescent birth rate; and the percentage of female representatives in parliament.
The summary of the data shows the distributions of the variables - we can observe e.g. the minimum, maximum, median and mean values of each:
summary(human)
## Edu2.FM Labo.FM Life.Exp Edu.Exp
## Min. :0.1717 Min. :0.1857 Min. :49.00 Min. : 5.40
## 1st Qu.:0.7264 1st Qu.:0.5984 1st Qu.:66.30 1st Qu.:11.25
## Median :0.9375 Median :0.7535 Median :74.20 Median :13.50
## Mean :0.8529 Mean :0.7074 Mean :71.65 Mean :13.18
## 3rd Qu.:0.9968 3rd Qu.:0.8535 3rd Qu.:77.25 3rd Qu.:15.20
## Max. :1.4967 Max. :1.0380 Max. :83.50 Max. :20.20
## GNI Mat.Mor Ado.Birth Parli.F
## Min. : 2.00 Min. : 1.0 Min. : 0.60 Min. : 0.00
## 1st Qu.: 53.50 1st Qu.: 11.5 1st Qu.: 12.65 1st Qu.:12.40
## Median : 99.00 Median : 49.0 Median : 33.60 Median :19.30
## Mean : 98.73 Mean : 149.1 Mean : 47.16 Mean :20.91
## 3rd Qu.:143.50 3rd Qu.: 190.0 3rd Qu.: 71.95 3rd Qu.:27.95
## Max. :194.00 Max. :1100.0 Max. :204.80 Max. :57.50
The striking impression from this overview is that there is a lot of variation between the countries, as shown by the min and max values of the variables. For example, the presence of women in parliament ranges from 0 to 57.5%, maternal mortality from 1 to 1100 (per 100 000 births), and the ratio of women to men with at least secondary education from 0.17 to 1.50.
A more detailed look at the data with the ggpairs function:
# Access GGally
library(GGally)
# Visualize the 'human' variables
ggpairs(human)
And a clearer look at the most prominent correlations with a correlation matrix visualisation (correlation plot):
# Calculate the correlation matrix and round it to include just 2 digits
cor_matrix <- cor(human) %>% round(digits = 2)
# Visualize the correlation matrix with a correlations plot
corrplot(cor_matrix, method="circle", type = "upper", cl.pos = "b", tl.pos = "d", tl.cex = 0.6)
Some of the variables have quite strong correlations with each other. For example, secondary education correlates positively with life expectancy and (quite understandably) with more years of schooling; it is, however, negatively correlated with adolescent births and maternal deaths. Life expectancy and expected length of education also correlate positively with each other, and so do maternal mortality and adolescent births; these are connected with the standard of living in a given country - GNI correlates negatively with both maternal deaths and adolescent births. It seems that a bad situation for women in a country correlates with other negative issues.
Tasks 3, 4 and 5
The next step is to perform principal component analysis (PCA) using singular value decomposition (SVD) on the non-standardized human data and show the variability captured by the principal components. Then we will draw a biplot displaying the observations by the first two principal components (PC1 on the x-axis, PC2 on the y-axis), along with arrows representing the original variables.
pca_human <- prcomp(human)
summary(pca_human)
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6
## Standard deviation 214.3202 54.48589 26.38141 11.47911 4.06687 1.60671
## Proportion of Variance 0.9233 0.05967 0.01399 0.00265 0.00033 0.00005
## Cumulative Proportion 0.9233 0.98298 0.99697 0.99961 0.99995 1.00000
## PC7 PC8
## Standard deviation 0.1905 0.1587
## Proportion of Variance 0.0000 0.0000
## Cumulative Proportion 1.0000 1.0000
biplot(pca_human, choices = 1:2, cex = c(0.5, 0.8), col = c("grey40", "deeppink2"), main = "Biplot, unscaled human data")
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped
It is clear that the standard deviations (i.e. the variability) of the principal components are very different in magnitude. From the summary of the PCA results and the variance captured by each component, it is clear that practically all of the variability in the original features is captured by the first principal component. PCA is sensitive to the relative scaling of the original features and takes features with larger variance to be more important than features with smaller variance.
The biplot is not easy to read: most of the data sits in the top right corner. Since the variables have not been standardized, gross national income (GNI) appears as the only significant variable because of its large variance. Almost no variability is captured by PC2 or the later components, so other relationships are not really shown, and hence this plot is not very informative.
Let’s standardize the variables in the human data and repeat the above analysis and then see what happens.
# Scale
human_std <- scale(human)
# Estimation
pca_human <- prcomp(human_std)
summary(pca_human)
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7
## Standard deviation 1.966 1.1388 0.9900 0.86598 0.69931 0.54001 0.46701
## Proportion of Variance 0.483 0.1621 0.1225 0.09374 0.06113 0.03645 0.02726
## Cumulative Proportion 0.483 0.6452 0.7677 0.86140 0.92253 0.95898 0.98625
## PC8
## Standard deviation 0.33172
## Proportion of Variance 0.01375
## Cumulative Proportion 1.00000
# rounded percentages of variance captured by each PC
s <- summary(pca_human)
pca_pr <- round(100 * s$importance[2, ], digits = 1)
# create object pc_lab to be used as axis labels
pc_lab <- paste0(c("Underdevelopment", "Gender equality"), " (", pca_pr, "%)")
biplot(pca_human, choices = 1:2, cex = c(0.5, 0.8), col = c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2], main = "Biplot, scaled human data")
After scaling the variables, the values of each variable are more balanced and sort of flattened, and the second biplot is much easier to interpret: the variability is more evenly distributed across the principal components, yet the prominence of PC1 is still visible.
The biplot could be interpreted as follows: the PC1 component reflects low human development, with variables of low life expectancy, low education, and high maternal mortality. Parliamentary participation, labour market participation and GNI are not correlated with PC1 but with PC2, the second component, which seems to reflect gender equality. The second principal component is relatively less important (see the length of the arrows).
Task 6
Here, we will load the tea dataset from the FactoMineR package and first explore the data briefly: look at the structure and dimensions of the data and visualize it. Then we will perform Multiple Correspondence Analysis on the tea data.
library(FactoMineR)
## Warning: package 'FactoMineR' was built under R version 3.4.3
# load data
data("tea")
str(tea)
## 'data.frame': 300 obs. of 36 variables:
## $ breakfast : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
## $ tea.time : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
## $ evening : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
## $ lunch : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
## $ dinner : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
## $ always : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
## $ home : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
## $ work : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
## $ tearoom : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
## $ friends : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
## $ resto : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
## $ pub : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
## $ Tea : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
## $ How : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
## $ sugar : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
## $ how : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ where : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ price : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
## $ age : int 39 45 47 23 48 21 37 36 40 37 ...
## $ sex : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
## $ SPC : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
## $ Sport : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
## $ age_Q : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
## $ frequency : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
## $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
## $ spirituality : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
## $ healthy : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
## $ diuretic : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
## $ friendliness : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
## $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ feminine : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
## $ sophisticated : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
## $ slimming : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ exciting : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
## $ relaxing : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
## $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
dim(tea)
## [1] 300 36
This tea consumption dataset has 300 observations and 36 variables, all of which are factor or categorical variables. The variables include questions on background factors (sex, age), habits related to drinking tea (frequency, sugar, at work etc), and attitudes towards this beverage (friendliness, exciting, spirituality…).
Six variables are selected for the MCA (following the DataCamp exercises):
# column names to keep in the dataset
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")
# select the 'keep_columns' to create a new dataset
tea_time <- dplyr::select(tea, one_of(keep_columns))
The MCA of the tea dataset:
# multiple correspondence analysis
mca <- MCA(tea_time)
# summary of the model
summary(mca)
##
## Call:
## MCA(X = tea_time)
##
##
## Eigenvalues
## Dim.1 Dim.2 Dim.3 Dim.4 Dim.5 Dim.6
## Variance 0.279 0.261 0.219 0.189 0.177 0.156
## % of var. 15.238 14.232 11.964 10.333 9.667 8.519
## Cumulative % of var. 15.238 29.471 41.435 51.768 61.434 69.953
## Dim.7 Dim.8 Dim.9 Dim.10 Dim.11
## Variance 0.144 0.141 0.117 0.087 0.062
## % of var. 7.841 7.705 6.392 4.724 3.385
## Cumulative % of var. 77.794 85.500 91.891 96.615 100.000
##
## Individuals (the 10 first)
## Dim.1 ctr cos2 Dim.2 ctr cos2 Dim.3
## 1 | -0.298 0.106 0.086 | -0.328 0.137 0.105 | -0.327
## 2 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 3 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 4 | -0.530 0.335 0.460 | -0.318 0.129 0.166 | 0.211
## 5 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 6 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 7 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 8 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 9 | 0.143 0.024 0.012 | 0.871 0.969 0.435 | -0.067
## 10 | 0.476 0.271 0.140 | 0.687 0.604 0.291 | -0.650
## ctr cos2
## 1 0.163 0.104 |
## 2 0.735 0.314 |
## 3 0.062 0.069 |
## 4 0.068 0.073 |
## 5 0.062 0.069 |
## 6 0.062 0.069 |
## 7 0.062 0.069 |
## 8 0.735 0.314 |
## 9 0.007 0.003 |
## 10 0.643 0.261 |
##
## Categories (the 10 first)
## Dim.1 ctr cos2 v.test Dim.2 ctr
## black | 0.473 3.288 0.073 4.677 | 0.094 0.139
## Earl Grey | -0.264 2.680 0.126 -6.137 | 0.123 0.626
## green | 0.486 1.547 0.029 2.952 | -0.933 6.111
## alone | -0.018 0.012 0.001 -0.418 | -0.262 2.841
## lemon | 0.669 2.938 0.055 4.068 | 0.531 1.979
## milk | -0.337 1.420 0.030 -3.002 | 0.272 0.990
## other | 0.288 0.148 0.003 0.876 | 1.820 6.347
## tea bag | -0.608 12.499 0.483 -12.023 | -0.351 4.459
## tea bag+unpackaged | 0.350 2.289 0.056 4.088 | 1.024 20.968
## unpackaged | 1.958 27.432 0.523 12.499 | -1.015 7.898
## cos2 v.test Dim.3 ctr cos2 v.test
## black 0.003 0.929 | -1.081 21.888 0.382 -10.692 |
## Earl Grey 0.027 2.867 | 0.433 9.160 0.338 10.053 |
## green 0.107 -5.669 | -0.108 0.098 0.001 -0.659 |
## alone 0.127 -6.164 | -0.113 0.627 0.024 -2.655 |
## lemon 0.035 3.226 | 1.329 14.771 0.218 8.081 |
## milk 0.020 2.422 | 0.013 0.003 0.000 0.116 |
## other 0.102 5.534 | -2.524 14.526 0.197 -7.676 |
## tea bag 0.161 -6.941 | -0.065 0.183 0.006 -1.287 |
## tea bag+unpackaged 0.478 11.956 | 0.019 0.009 0.000 0.226 |
## unpackaged 0.141 -6.482 | 0.257 0.602 0.009 1.640 |
##
## Categorical variables (eta2)
## Dim.1 Dim.2 Dim.3
## Tea | 0.126 0.108 0.410 |
## How | 0.076 0.190 0.394 |
## how | 0.708 0.522 0.010 |
## sugar | 0.065 0.001 0.336 |
## where | 0.702 0.681 0.055 |
## lunch | 0.000 0.064 0.111 |
# visualize MCA
plot(mca, invisible=c("ind"), habillage = "quali")
MCA can be used to detect patterns and tendencies in qualitative data (which we humanists love so much).
The MCA plots group the observations. The variables how and where (the packaging form and the place of purchase, respectively) are close to each other and more prominent than the others.
The last plot shows how the variables relate to the dimensions. It can be observed that the majority of individuals are found in the middle of the plot and there are no outlier cases. Buying tea bags from chain stores and buying unpackaged tea from tea shops are combinations that kind of go together - these are the most popular purchase patterns. Earl Grey tea is close to milk, while green tea definitely is not.